    Physical human-robot collaboration: Robotic systems, learning methods, collaborative strategies, sensors, and actuators

    This article presents a state-of-the-art survey of the robotic systems, sensors, actuators, and collaborative strategies for physical human-robot collaboration (pHRC). It begins with an overview of robotic systems built on cutting-edge sensing and actuation technologies suitable for pHRC operations, along with the intelligent assist devices employed in pHRC. Sensors are surveyed as essential components for establishing communication between a human and a robotic system, since they supply the signals needed to drive the robotic actuators. The survey reveals that the design of new-generation collaborative robots and other intelligent robotic systems has paved the way for sophisticated learning techniques and control algorithms to be deployed in pHRC, and it identifies the components that must be considered for effective pHRC. Finally, the major advances are discussed, and open research directions and future challenges are presented.
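    To make the sensor-to-actuator signal path mentioned in this abstract concrete, below is a minimal sketch of one common pHRC control scheme: a 1-DoF admittance controller that maps a measured human interaction force to a commanded actuator velocity. The virtual mass and damping gains, the read_force_sensor placeholder, and the single-axis setting are illustrative assumptions, not details taken from any system in the survey.

```python
import numpy as np

# Illustrative sketch (not from the survey): a 1-DoF admittance
# controller, where a force sensor's signal drives the actuator.
# Gains and the sensor stub below are assumed values for demonstration.

M = 2.0    # virtual mass [kg]
D = 8.0    # virtual damping [N*s/m]
dt = 0.01  # control period [s]

def admittance_step(force: float, velocity: float) -> float:
    """Integrate M*a + D*v = f_ext one step; return the new velocity."""
    accel = (force - D * velocity) / M
    return velocity + accel * dt

def read_force_sensor(t: float) -> float:
    # Placeholder sensor: a human push between t=0.5 s and t=1.5 s.
    return 10.0 if 0.5 <= t < 1.5 else 0.0

v = 0.0
for k in range(300):
    t = k * dt
    f = read_force_sensor(t)   # sensor supplies the signal...
    v = admittance_step(f, v)  # ...which the control law turns into
                               # a velocity command for the actuator
print(f"velocity after release: {v:.4f} m/s")  # decays toward zero
```

    The virtual mass and damping let the robot yield compliantly to the human's push, which is the basic mechanism by which a sensor signal "drives" the actuator in many pHRC systems.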

    A Two-Stream CNN Framework for American Sign Language Recognition Based on Multimodal Data Fusion

    At present, vision-based hand gesture recognition is important in human-robot interaction (HRI): this non-contact method enables natural and friendly interaction between people and robots. To this end, a two-stream CNN framework (2S-CNN) is proposed to recognize American Sign Language (ASL) hand gestures based on multimodal (RGB and depth) data fusion. First, the hand gesture data is enhanced to remove the influence of background and noise. Second, RGB and depth hand gesture features are extracted for recognition using a CNN on each of the two streams. Finally, a fusion layer is designed to fuse the recognition results of the two streams. This method exploits multimodal data to increase the recognition accuracy of ASL hand gestures. Experiments show that the recognition accuracy of 2S-CNN reaches 92.08% on the ASL fingerspelling database, higher than that of the baseline methods.
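    A minimal sketch of the two-stream idea described in this abstract is given below: one small CNN per modality (RGB and depth) plus a fusion layer that combines the two streams' class scores. The layer sizes, the learnable score-weighting used as the fusion layer, and the 24-class output (static ASL fingerspelling letters, excluding the motion-based J and Z) are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class StreamCNN(nn.Module):
    """One modality stream: a small CNN that outputs class scores.
    Layer sizes are assumed for illustration, not the paper's design."""
    def __init__(self, in_channels: int, num_classes: int = 24):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # global average pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

class TwoStreamCNN(nn.Module):
    """RGB stream + depth stream, fused at the score level."""
    def __init__(self, num_classes: int = 24):
        super().__init__()
        self.rgb_stream = StreamCNN(3, num_classes)    # RGB: 3 channels
        self.depth_stream = StreamCNN(1, num_classes)  # depth: 1 channel
        # Assumed fusion layer: a learnable weighting of the two
        # streams' scores (one simple form of late fusion).
        self.fusion_weight = nn.Parameter(torch.tensor(0.5))

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        w = torch.sigmoid(self.fusion_weight)
        return w * self.rgb_stream(rgb) + (1 - w) * self.depth_stream(depth)

model = TwoStreamCNN()
rgb = torch.randn(4, 3, 64, 64)    # batch of RGB hand crops
depth = torch.randn(4, 1, 64, 64)  # corresponding depth maps
logits = model(rgb, depth)
print(logits.shape)  # torch.Size([4, 24])
```

    Fusing at the score level keeps each stream specialized for its modality while letting the combined output benefit from both, which is the motivation the abstract gives for multimodal fusion.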